
Advanced XR-Based 6-DOF Catheter Tracking System for Immersive Cardiac Intervention Training

Annabestani, Mohsen, Sriram, Sandhya, Wong, S. Chiu, Sigaras, Alexandros, Mosadegh, Bobak

arXiv.org Artificial Intelligence

Abstract: Extended Reality (XR) technologies are gaining traction as effective tools for medical training and procedural guidance, particularly in complex cardiac interventions. This paper presents a novel system for real-time 3D tracking and visualization of intracardiac echocardiography (ICE) catheters, with precise measurement of the roll angle. The system's data is integrated into an interactive Unity-based environment, rendered through the Meta Quest 3 XR headset, combining a dynamically tracked catheter with a patient-specific 3D heart model. This immersive environment allows testing of the importance of 3D depth perception, compared to 2D projections, as a form of visualization in XR. Our experimental study, conducted using the ICE catheter with six participants, suggests that 3D visualization is not necessarily beneficial over the 2D views offered by the XR system, although all cardiologists saw its utility for pre-operative training, planning, and intra-operative guidance. The proposed system qualitatively shows great promise in transforming catheter-based interventions, particularly ICE procedures, by improving visualization, interactivity, and skill development.

Keywords: Percutaneous Cardiac Intervention, Extended Reality, Computer Vision, 3D Visualization, ICE Catheter, Roll Angle

1. INTRODUCTION
Minimally invasive interventions (MII) have revolutionized the field of cardiac care, offering patients reduced recovery times, lower risks of complications, and shorter hospital stays compared to traditional open-heart surgeries. These procedures, such as percutaneous cardiac interventions, rely on the precise navigation of catheters through complex vascular structures and heart chambers [1-6].


UWA360CAM: A 360$^{\circ}$ 24/7 Real-Time Streaming Camera System for Underwater Applications

Pham, Quan-Dung, Zhu, Yipeng, Ha, Tan-Sang, Nguyen, K. H. Long, Hua, Binh-Son, Yeung, Sai-Kit

arXiv.org Artificial Intelligence

An omnidirectional camera is a cost-effective and information-rich sensor highly suitable for many marine applications and the ocean scientific community, encompassing several domains such as augmented reality, mapping, motion estimation, visual surveillance, and simultaneous localization and mapping. However, designing and constructing such a high-quality 360$^{\circ}$ real-time streaming camera system for underwater applications is a challenging problem due to the technical complexity of several aspects, including sensor resolution, wide field of view, power supply, optical design, system calibration, and overheating management. This paper presents a novel and comprehensive system that addresses the complexities associated with the design, construction, and implementation of a fully functional 360$^{\circ}$ real-time streaming camera system specifically tailored for underwater environments. Our proposed system, UWA360CAM, can stream video in real time, operate 24/7, and capture 360$^{\circ}$ underwater panorama images. Notably, our work is the pioneering effort in providing a detailed and replicable account of this system. The experiments provide a comprehensive analysis of our proposed system.


Efficient human-in-loop deep learning model training with iterative refinement and statistical result validation

Zahn, Manuel, Perrin, Douglas P.

arXiv.org Artificial Intelligence

Annotation and labeling of images are among the biggest challenges in applying deep learning to medical data. Current processes are time- and cost-intensive and, therefore, a limiting factor for the wide adoption of the technology. Additionally, validating that measured performance improvements are significant is important for selecting the best model. In this paper, we demonstrate a method for creating segmentations, a necessary part of data cleaning in ultrasound imaging machine learning pipelines. We propose a four-step method that leverages automatically generated training data and fast human visual checks to improve model accuracy while keeping the time, effort, and cost low. We also showcase running experiments multiple times to allow the use of statistical analysis. Poor-quality automated ground truth data and quick visual inspections efficiently train an initial base model, which is then refined using a small set of more expensive human-generated ground truth data. The method is demonstrated on a cardiac ultrasound segmentation task, removing background data, including static PHI. Significance is shown by running the experiments multiple times and applying Student's t-test to the performance distributions. The initial segmentation accuracy of a simple thresholding algorithm was improved from 92% to 98%. The performance of models trained on complicated algorithms can be matched or beaten by pre-training with the poorer-performing algorithms and a small quantity of high-quality data. The introduction of statistical significance analysis for deep learning models helps validate the measured performance improvements. The method offers a cost-effective and fast approach to achieving high-accuracy models while minimizing the cost and effort of acquiring high-quality training data.
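The validation step described above — repeating training runs and comparing the resulting performance distributions with a t-test — can be sketched as follows. The accuracy values here are illustrative placeholders, not figures from the paper, and a Welch's t statistic is used as a simple stand-in for the full analysis:

```python
from statistics import mean, stdev

# Hypothetical per-run accuracy scores from repeated training runs
# (illustrative values only, not from the paper).
baseline = [0.920, 0.915, 0.925, 0.918, 0.922]   # simple thresholding model
refined  = [0.980, 0.978, 0.983, 0.979, 0.981]   # human-in-loop refined model

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(b) - mean(a)) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

t = welch_t(baseline, refined)
# A large |t| indicates the improvement is unlikely to be run-to-run noise.
print(f"t = {t:.2f}")
```

In practice one would compare the statistic against the t distribution (or use a library routine such as SciPy's independent two-sample t-test) to obtain a p-value; repeating each experiment several times is what makes this comparison meaningful.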


Computer Vision and Upcoming Years…

#artificialintelligence

Computer Vision is an interdisciplinary field that focuses on enabling computers to interpret and understand visual data from the world around us. The field of computer vision has undergone significant advancements in recent years, especially with the rise of deep learning and the availability of large-scale datasets. The increasing accuracy of computer vision algorithms and the expanding use of computer vision applications across various industries have led to a promising future for computer vision. Another area where computer vision is expected to be increasingly utilized is in healthcare. Computer vision can aid in medical diagnosis by analyzing medical images such as X-rays, CT scans, and MRIs.


Computer Vision: What it is and why it matters

#artificialintelligence

In the broadest sense, it is the ability of computers to interpret and understand digital images. This includes everything from identifying objects in an image to understanding the meaning of an image. Computer vision is a rapidly growing field with many potential applications. It is already being used in a number of industries, including healthcare, automotive, and security. And as the technology continues to develop, the potential uses for computer vision are only going to increase.


Learn PyTorch for Deep Learning – Free 26-Hour Course

#artificialintelligence

My comprehensive PyTorch course is now live on freeCodeCamp.org. The best way to learn is by doing, and that's just what we'll do in the Learn PyTorch for Deep Learning: Zero to Mastery course. If you're new to data science and machine learning, consider the course a momentum builder. By the end, you'll be comfortable navigating the PyTorch documentation, reading PyTorch code, writing PyTorch code, searching for things you don't understand, and building your own machine learning projects.


Machine Learning Engineer - Remote Tech Jobs

#artificialintelligence

Metropolis develops advanced computer vision and machine learning technology that makes mobile commerce remarkable. Our platform is already deployed in hundreds of mobility facilities and industries, with billions of dollars in opportunities. We're building the digital pipes through which the future of mobile commerce will move. Metropolis is seeking a Machine Learning Engineer to accelerate the development of the computer vision algorithms that empower our mobility services. Reporting to the technical team lead of Machine Learning, you will be responsible for the development, deployment, and ongoing optimization of the models at the core of our platform.


Computer Vision and Deep Learning for Agriculture - PyImageSearch

#artificialintelligence

The agriculture sector is the foundation of any economy. However, as the population grows, the sector will come under pressure to scale its supply several times over to cope with increasing consumption. In addition, uncertain factors such as climate change, diseases, and infertile land have propelled the sector to adopt innovative approaches like artificial intelligence to protect and increase crop yield. AI has the potential to transform the agriculture sector by helping farmers minimize the risk of diseases, proactively adapt to changing climate conditions, and monitor crop security using drones, all while keeping labor costs down (Figure 1). As a result, the overall AI-in-agriculture market is projected to grow from an estimated $1B in 2020 to $4B by 2026, at a compound annual growth rate (CAGR) of 25.5%. This series is about CV and DL for Industrial and Big Business Applications.
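As a sanity check on the quoted figures, the standard CAGR formula does take $1B to roughly $4B over the 2020-2026 span:

```python
# Forward: $1B (2020) compounding at 25.5%/yr for 6 years.
start, rate, years = 1.0, 0.255, 6
end = start * (1 + rate) ** years
print(f"${end:.2f}B")  # about $3.9B, consistent with the ~$4B projection

# Inverse: the rate implied by exactly $1B -> $4B over 6 years.
implied = (4.0 / 1.0) ** (1 / years) - 1
print(f"{implied:.1%}")  # about 26%, close to the quoted 25.5%
```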


Remote Computer Vision Engineer openings in Seattle, United States on August 09, 2022 – Data Science Jobs

#artificialintelligence

Altana is an equal opportunity employer with a commitment to inclusion across race and ethnicity, gender, sexual orientation, age, religion, physical ability, veteran status, and national origin. We offer a comprehensive healthcare package and paid parental leave of 3 months for the primary caregiver and 1 month for the secondary caregiver.


What is artificial intelligence?

#artificialintelligence

We are excited to bring Transform 2022 back in person on July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. The words "artificial intelligence" (AI) have been used to describe the workings of computers for decades, but the precise meaning has shifted over time. Today, AI describes efforts to teach computers to imitate a human's ability to solve problems and make connections based on insight, understanding, and intuition. Artificial intelligence usually encompasses the growing body of cutting-edge work in technology that aims to train machines to accurately imitate or, in some cases, exceed the capabilities of humans. Older algorithms, when they grow commonplace, tend to be pushed out of the tent.